Can Michael Pollan crack the problem of consciousness in his new book?

New Scientist

How consciousness works is one of the most perplexing questions in science. You would expect our intimacy with it to give us a leg up in understanding it, but this has proven to be more of a hindrance than a help: how can you study something objectively when it is also the very tool you are using to do the studying? This conundrum forms the backbone of Michael Pollan's latest book. Pollan's previous works include The Omnivore's Dilemma and How to Change Your Mind; the former helped bring the environmental and animal welfare impacts of the US food system to light, while the latter introduced the public to the psychedelic research renaissance.


The robots who predict the future

MIT Technology Review

Three books unpack our infatuation with prediction, and what we lose when we outsource this task to machines. To be human is, fundamentally, to be a forecaster. Trying to see the future, whether through the lens of past experience or the logic of cause and effect, has helped us hunt, avoid being hunted, plant crops, forge social bonds, and in general survive in a world that does not prioritize our survival. Indeed, as the tools of divination have changed over the centuries, from tea leaves to data sets, our conviction that the future can be known (and therefore controlled) has only grown stronger. Today, we are awash in a sea of predictions so vast and unrelenting that most of us barely even register them. As I write this sentence, algorithms on some remote server are busy trying to guess my next word based on those I have already typed.


The Good Old Days of Sports Gambling

The New Yorker

Recent memoirs by the retired bookie Art Manteris and the storied gambler Billy Walters provide a glimpse of an industry in its fledgling form--and a preview of the DraftKings era to come. Las Vegas is no longer the seat of the sportsbook gods. In most states, it's now legal, and extremely popular, to place bets using apps or websites such as FanDuel and DraftKings. From your couch, you can wager on everything from the results of snooker championships to the color of the Gatorade poured over the victorious coach after the Super Bowl. The N.F.L., along with the other major-league American sports associations, has officially partnered with sports-betting sites, and their alliance has proved so lucrative that other industries want in on the action; last month, the Golden Globes made a deal with Polymarket, a predictions-market platform, to encourage wagering (or "trading," if you prefer) on the outcomes of its awards race.


The best new popular science books of February 2026

New Scientist

Winter's somnolent spell is still far from lifting for those of us in the northern hemisphere, so there's no need for excuses as you take to your bed with a pile of good books. And there's plenty to keep you occupied while you eschew the chilly outdoors. This month, we have climate hope from a well-placed environmental reporter, formerly of this parish, an honest memoir from a star scientist and a jaw-dropping account of the commodification of women's bodies. Given the Valentine's Day fun this month, we also have a book that may challenge what we thought we knew about finding love. It's always good to get all the help we can in that department - enjoy! "On clear moonlit nights we sometimes step outside and howl at the moon together. It is cathartic, primal and a really good laugh. I am not sure what our neighbours think about it, though."


The ascent of the AI therapist

MIT Technology Review

Four new books grapple with a global mental-health crisis and the dawn of algorithmic therapy. [Image: a technician adjusts the wiring inside the Mark I Perceptron, an early AI system designed not by a mathematician but by a psychologist.] More than a billion people worldwide suffer from a mental-health condition, according to the World Health Organization. The prevalence of anxiety and depression is growing in many demographics, particularly among young people, and suicide claims hundreds of thousands of lives globally each year. Given the clear demand for accessible and affordable mental-health services, it's no wonder that people have looked to artificial intelligence for possible relief.


The Ethics of Generative AI

Klenk, Michael

arXiv.org Artificial Intelligence

This chapter discusses the ethics of generative AI. It provides a technical primer to show how generative AI affords experiencing technology as if it were human, and argues that this affordance offers a fruitful focus for the philosophical ethics of generative AI. It then shows how generative AI can both aggravate and alleviate familiar concerns in AI ethics, including responsibility, privacy, bias and fairness, and forms of alienation and exploitation. Finally, the chapter examines ethical questions that arise specifically from generative AI's mimetic generativity, such as debates about authorship and credit, the emergence of as-if social relationships with machines, and new forms of influence, persuasion, and manipulation.


DeformAr: Rethinking NER Evaluation through Component Analysis and Visual Analytics

Younes, Ahmed Mustafa

arXiv.org Artificial Intelligence

Transformer models have significantly advanced Natural Language Processing (NLP), demonstrating strong performance in English. However, their effectiveness in Arabic, particularly for Named Entity Recognition (NER), remains limited, even with larger pre-trained models. This performance gap stems from multiple factors, including tokenisation, dataset quality, and annotation inconsistencies. Existing studies often analyse these issues in isolation, failing to capture their joint effect on system behaviour and performance. We introduce DeformAr (Debugging and Evaluation Framework for Transformer-based NER Systems), a novel framework designed to investigate and explain the performance discrepancy between Arabic and English NER systems. DeformAr integrates a data extraction library and an interactive dashboard, supporting two modes of evaluation: cross-component analysis and behavioural analysis. The framework divides each language's system into dataset and model components to examine their interactions. The analysis proceeds in two stages. First, cross-component analysis provides systematic diagnostic measures across data and model subcomponents, addressing the "what," "how," and "why" behind observed discrepancies. The second stage applies behavioural analysis by combining interpretability techniques with token-level metrics, interactive visualisations, and representation space analysis. These stages enable a component-aware diagnostic process that detects model behaviours and explains them by linking them to underlying representational patterns and data factors. DeformAr is the first Arabic-specific, component-based interpretability tool, offering a crucial resource for advancing model analysis in under-resourced languages.
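As a rough illustration of the token-level diagnostics such a framework computes, here is a minimal sketch in Python. The BIO tags and the toy example are invented for illustration; DeformAr's actual metrics, data model, and dashboard are far richer than this per-type token tally.

```python
# A minimal sketch of token-level NER error attribution: compare gold and
# predicted BIO tags and report per-entity-type precision/recall.
# Tags and the toy example are illustrative, not taken from the paper.
from collections import Counter

def token_level_report(gold: list[str], pred: list[str]) -> dict[str, dict[str, float]]:
    """Per-entity-type token precision/recall over aligned BIO tag sequences."""
    assert len(gold) == len(pred), "sequences must be token-aligned"
    tp, fp, fn = Counter(), Counter(), Counter()
    for g, p in zip(gold, pred):
        g_type = g.split("-", 1)[1] if "-" in g else None
        p_type = p.split("-", 1)[1] if "-" in p else None
        if p_type and p_type == g_type:
            tp[p_type] += 1
        else:
            if p_type:
                fp[p_type] += 1   # predicted an entity token of the wrong type
            if g_type:
                fn[g_type] += 1   # missed a gold entity token
    report = {}
    for etype in set(tp) | set(fp) | set(fn):
        prec = tp[etype] / (tp[etype] + fp[etype]) if tp[etype] + fp[etype] else 0.0
        rec = tp[etype] / (tp[etype] + fn[etype]) if tp[etype] + fn[etype] else 0.0
        report[etype] = {"precision": prec, "recall": rec}
    return report

gold = ["B-PER", "I-PER", "O", "B-LOC", "O"]
pred = ["B-PER", "O",     "O", "B-ORG", "O"]
print(token_level_report(gold, pred))
```

Breaking scores down by entity type and token, rather than reporting one aggregate F1, is what lets this style of analysis link errors back to data factors such as tokenisation or annotation inconsistencies.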


How Should the Law Treat Future AI Systems? Fictional Legal Personhood versus Legal Identity

Alexander, Heather J., Simon, Jonathan A., Pinard, Frédéric

arXiv.org Artificial Intelligence

The law draws a sharp distinction between objects and persons, and between two kinds of persons: the "fictional" kind (i.e. corporations) and the "non-fictional" kind (individual or "natural" persons). This paper will assess whether we maximize overall long-term legal coherence by (A) maintaining an object classification for all future AI systems, (B) creating fictional legal persons associated with suitably advanced, individuated AI systems (giving these fictional legal persons derogable rights and duties associated with certified groups of existing persons, potentially including free speech, contract rights, and standing to sue "on behalf of" the AI system), or (C) recognizing non-fictional legal personhood through legal identity for suitably advanced, individuated AI systems (recognizing them as entities meriting legal standing with non-derogable rights, which for the human case include life, due process, habeas corpus, freedom from slavery, and freedom of conscience). We will clarify the meaning and implications of each option along the way, considering liability, copyright, family law, fundamental rights, civil rights, citizenship, and AI safety regulation. We tentatively find that the non-fictional personhood approach may be best from a coherence perspective, at least for some advanced AI systems. An object approach may prove untenable for sufficiently humanoid advanced systems, though we suggest it is adequate for currently existing systems as of 2025. While fictional personhood would resolve some coherence issues for future systems, it would create others and provide solutions that are neither durable nor fit for purpose. Finally, our review suggests that "hybrid" approaches are likely to fail and lead to further incoherence: the choice between object, fictional person, and non-fictional person is unavoidable.


XtraGPT: Context-Aware and Controllable Academic Paper Revision

Chen, Nuo, HuiKai, Andre Lin, Wu, Jiaying, Hou, Junyi, Zhang, Zining, Wang, Qian, Wang, Xidong, He, Bingsheng

arXiv.org Artificial Intelligence

Despite the growing adoption of large language models (LLMs) in academic workflows, their ability to support high-quality scientific writing remains limited. Most existing systems are designed for general-purpose scientific text generation and fail to meet the sophisticated demands of research communication beyond surface-level polishing, such as maintaining conceptual coherence across sections. Furthermore, academic writing is inherently iterative and revision-driven, a process not well supported by direct prompting-based paradigms. To address these challenges, we propose a human-AI collaboration framework for academic paper revision centered on criteria-guided intent alignment and context-aware modeling. To validate the framework, we curate a dataset of 7,000 research papers from top-tier venues, annotated with 140,000 instruction-response pairs that reflect realistic, section-level scientific revisions. We instantiate the framework in XtraGPT, the first suite of open-source LLMs (1.5B to 14B parameters) for context-aware, instruction-guided writing assistance. Extensive experiments show that XtraGPT significantly outperforms same-scale baselines and approaches the quality of proprietary systems. Both automated preference assessments and human evaluations confirm the effectiveness of XtraGPT in improving scientific drafts.
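To make the instruction-guided setup concrete, here is a minimal sketch of what a criteria-guided, context-aware revision request might look like. The prompt template and field names are assumptions for illustration, not XtraGPT's actual data schema or API.

```python
# A minimal sketch of a section-level, criteria-guided revision request for an
# instruction-tuned LLM. Field names and the template are assumptions, not
# XtraGPT's actual schema.
from dataclasses import dataclass

@dataclass
class RevisionRequest:
    paper_context: str   # surrounding sections, so edits stay coherent with the paper
    section_text: str    # the passage the author wants revised
    criterion: str       # e.g. "strengthen the motivation", "tighten the abstract"

def build_prompt(req: RevisionRequest) -> str:
    """Assemble a section-level revision instruction for an instruction-tuned LLM."""
    return (
        "You are revising one section of a research paper.\n"
        f"Paper context:\n{req.paper_context}\n\n"
        f"Section to revise:\n{req.section_text}\n\n"
        f"Revision criterion: {req.criterion}\n"
        "Rewrite the section to satisfy the criterion while staying consistent "
        "with the rest of the paper. Output only the revised section."
    )

req = RevisionRequest(
    paper_context="Abstract: We study ... Introduction: Prior work ...",
    section_text="Our method is good and works well on many tasks.",
    criterion="replace vague claims with specific, verifiable statements",
)
print(build_prompt(req))  # feed to any instruction-tuned model
```

Supplying the paper context alongside the target section is what distinguishes this setup from surface-level polishing: the model can keep the revised passage consistent with the rest of the draft rather than rewriting it in isolation.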


Intelligent Load Balancing in Cloud Computer Systems

Sliwko, Leszek

arXiv.org Artificial Intelligence

Cloud computing is an established technology that lets users share resources on a scale never before seen in IT history. A cloud system connects many individual servers so that related tasks can be processed in several environments at the same time. Clouds are typically more cost-effective than single computers of comparable computing performance. The sheer physical size of such systems means that thousands of machines may be involved. The focus of this research was to design a strategy for dynamically allocating tasks without overloading cloud nodes, so that system stability is maintained at minimum cost. This research adds the following new contributions to the state of knowledge: (i) a novel taxonomy and categorisation of three classes of schedulers, namely OS-level, Cluster, and Big Data, which highlights their distinct evolution and objectives; (ii) an abstract model of cloud resource utilisation, covering multiple resource types and the cost of task migration; (iii) experiments with virtual machine live migration, yielding a formula that estimates the network traffic this process generates; (iv) a high-fidelity cloud workload simulator based on month-long workload traces from Google's computing cells; (v) two approaches to resource management, proposed and examined in the practical part of the manuscript: a centralised metaheuristic load balancer and a decentralised agent-based system. The project involved extensive experiments on the University of Westminster HPC cluster, and the promising results are presented together with detailed discussion and conclusions.
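As a toy illustration of the core scheduling decision (placing tasks without overloading nodes), here is a minimal greedy sketch. The node capacities, task demands, and the least-utilised-first rule are invented for the example; the thesis's metaheuristic and agent-based balancers are considerably more sophisticated.

```python
# A toy sketch of capacity-aware task placement: assign each task to the node
# that stays least utilised, and refuse placements that would overload a node.
# Capacities and demands below are invented for illustration.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    capacity: float          # total CPU units
    load: float = 0.0        # currently allocated CPU units

def assign(task_demand: float, nodes: list[Node]) -> Node | None:
    """Greedy least-utilised-first placement; returns None if no node fits."""
    candidates = [n for n in nodes if n.load + task_demand <= n.capacity]
    if not candidates:
        return None          # a real balancer would queue or migrate instead
    best = min(candidates, key=lambda n: (n.load + task_demand) / n.capacity)
    best.load += task_demand
    return best

nodes = [Node("a", 8.0), Node("b", 16.0), Node("c", 8.0)]
for demand in [4.0, 6.0, 5.0, 7.0, 9.0]:
    chosen = assign(demand, nodes)
    print(f"task {demand}: -> {chosen.name if chosen else 'rejected'}")
```

A greedy rule like this makes each decision locally and instantly; the appeal of metaheuristic or agent-based approaches, as studied in the thesis, is that they can also weigh global balance and migration costs across many placements at once.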